ZipNet-GAN: Inferring Fine-grained Mobile Traffic Patterns via a Generative Adversarial Neural Network

Authors

  • Chaoyun Zhang
  • Xi Ouyang
  • Paul Patras
Abstract

Large-scale mobile trac analytics is becoming essential to digital infrastructure provisioning, public transportation, events planning, and other domains. Monitoring city-wide mobile trac is however a complex and costly process that relies on dedicated probes. Some of these probes have limited precision or coverage, others gather tens of gigabytes of logs daily, which independently o‚er limited insights. Extracting €ne-grained paŠerns involves expensive spatial aggregation of measurements, storage, and post-processing. In this paper, we propose a mobile trac super-resolution technique that overcomes these problems by inferring narrowly localised trac consumption from coarse measurements. We draw inspiration from image processing and design a deep-learning architecture tailored to mobile networking, which combines Zipper Network (ZipNet) and Generative Adversarial neural Network (GAN) models. Œis enables to uniquely capture spatio-temporal relations between trac volume snapshots routinely monitored over broad coverage areas (‘low-resolution’) and the corresponding consumption at 0.05 km2 level (‘high-resolution’) usually obtained a‰er intensive computation. Experiments we conduct with a real-world data set demonstrate that the proposed ZipNet(-GAN) infers trac consumption with remarkable accuracy and up to 100× higher granularity as compared to standard probing, while outperforming existing data interpolation techniques. To our knowledge, this is the €rst time super-resolution concepts are applied to large-scale mobile trac analysis and our solution is the €rst to infer €ne-grained urban trac paŠerns from coarse aggregates.

Similar resources

AttnGAN: Fine-Grained Text to Image Generation with Attentional Generative Adversarial Networks

In this paper, we propose an Attentional Generative Adversarial Network (AttnGAN) that allows attention-driven, multi-stage refinement for fine-grained text-to-image generation. With a novel attentional generative network, the AttnGAN can synthesize fine-grained details at different subregions of the image by paying attentions to the relevant words in the natural language description. In additi...

Adversarial Generation of Training Examples for Vehicle License Plate Recognition

Generative Adversarial Networks (GAN) have attracted much research attention recently, leading to impressive results for natural image generation. However, to date little success was observed in using GAN generated images for improving classification tasks. Here we attempt to explore, in the context of car license plate recognition, whether it is possible to generate synthetic training data usi...

Automatic Colorization of Grayscale Images Using Generative Adversarial Networks

Automatic colorization of gray scale images poses a unique challenge in Information Retrieval. The goal of this field is to colorize images which have lost some color channels (such as the RGB channels or the AB channels in the LAB color space) while only having the brightness channel available, which is usually the case in a vast array of old photos and portraits. Having the ability to coloriz...

Text Generation Based on Generative Adversarial Nets with Latent Variable

In this paper, we propose a model using generative adversarial net (GAN) to generate realistic text. Instead of using standard GAN, we combine variational autoencoder (VAE) with generative adversarial net. The use of high-level latent random variables is helpful to learn the data distribution and solve the problem that generative adversarial net always emits the similar data. We propose the VGA...

CapsuleGAN: Generative Adversarial Capsule Network

We present Generative Adversarial Capsule Network (CapsuleGAN), a framework that uses capsule networks (CapsNets) instead of the standard convolutional neural networks (CNNs) as discriminators within the generative adversarial network (GAN) setting, while modeling image data. We provide guidelines for designing CapsNet discriminators and the updated GAN objective function, which incorporates th...



Publication date: 2017